91.
Model selection and model averaging are two important techniques for obtaining practical and useful models in applied research. However, it is now well known that many complex issues arise, especially in the context of model selection, when the stochastic nature of the selection process is ignored and estimates, standard errors, and confidence intervals are calculated as if the selected model had been known a priori. While model averaging aims to incorporate the uncertainty associated with the model selection process by combining estimates over a set of models, there is still some debate over appropriate interpretation and confidence interval construction. These problems become even more complex in the presence of missing data, and it is currently not entirely clear how to proceed. To deal with such situations, a framework for model selection and model averaging in the context of missing data is proposed. The focus lies on multiple imputation as a strategy to deal with the missingness: a subsequent combination with model averaging aims to incorporate both the uncertainty associated with the model selection and with the imputation process. Furthermore, the performance of bootstrapping as a flexible extension to our framework is evaluated. Monte Carlo simulations are used to reveal the behavior of the proposed estimators in the context of the linear regression model. The practical implications of our approach are illustrated by means of a recent survival study on sputum culture conversion in pulmonary tuberculosis.
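The combination the abstract describes can be sketched in a few lines: impute the missing value several times, average candidate models within each completed data set, and pool across imputations. This is a hedged toy illustration only; the mean-plus-noise imputer, the two candidate models, and the AIC weights are assumptions for the sketch, not the authors' procedure.

```python
import random, math

random.seed(0)
x = [float(i) for i in range(10)]
y = [2.0 + 0.5 * xi + random.gauss(0, 0.1) for xi in x]
y[3] = None  # one missing response

def fit_ols(xs, ys, use_slope):
    """Closed-form least squares for y = a (+ b*x); returns ((a, b), rss)."""
    n = len(xs)
    if use_slope:
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
             / sum((xi - mx) ** 2 for xi in xs))
        a = my - b * mx
    else:
        a, b = sum(ys) / n, 0.0
    rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(xs, ys))
    return (a, b), rss

def aic(rss, n, k):
    return n * math.log(rss / n) + 2 * k

M = 20                      # number of imputations
avg_slope = 0.0
for _ in range(M):
    # crude "proper" imputation: draw from the observed mean and spread
    obs = [yi for yi in y if yi is not None]
    mu = sum(obs) / len(obs)
    sd = (sum((v - mu) ** 2 for v in obs) / (len(obs) - 1)) ** 0.5
    y_imp = [yi if yi is not None else random.gauss(mu, sd) for yi in y]
    # model averaging over {intercept-only, intercept+slope} via AIC weights
    fits = [fit_ols(x, y_imp, use_slope=s) for s in (False, True)]
    aics = [aic(rss, len(x), 1 + s) for s, (_, rss) in enumerate(fits)]
    amin = min(aics)
    w = [math.exp(-(a - amin) / 2) for a in aics]
    w = [wi / sum(w) for wi in w]
    slope_ma = sum(wi * coef[1] for wi, (coef, _) in zip(w, fits))
    avg_slope += slope_ma / M   # Rubin-style pooling of the MA estimates

print(round(avg_slope, 2))
```

The pooled slope reflects imputation uncertainty (it varies across draws for `y[3]`) as well as model uncertainty (the AIC weights shrink it toward the intercept-only model).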
92.
The wind direction sensor is a key component of automatic weather stations, and noise and missing values in its operating data increase measurement error. A calibration control method for automatic weather station wind direction sensors based on nonlinear fitting is therefore designed. The structure and working principle of the sensor are analyzed, and an equivalent model of the automatic weather station is built. Real-time operating data from the sensor are acquired by cyclic sampling and preprocessed through filtering and missing-value compensation. On the basis of the preprocessed data, nonlinear fitting is used to compute the sensor error, and a calibration controller is installed to carry out calibration control. Experimental results show that, compared with a conventional calibration control method, the proposed method reduces the sensor's measurement error by 0.759° and lowers the error change rate, demonstrating good calibration control performance for automatic weather station wind direction sensors.
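The core idea, fitting the sensor's angular error as a smooth function of direction and subtracting the fitted error, can be sketched as follows. The harmonic error model `err(t) = p*sin(t) + q*cos(t)` and the synthetic readings are assumptions for illustration, not the paper's actual fitting function or data.

```python
import math

# Synthetic calibration run: true directions vs. simulated sensor output (degrees).
true_deg = list(range(0, 360, 30))

def simulated_reading(t):
    r = math.radians(t)
    return t + 2.0 * math.sin(r) - 1.0 * math.cos(r)  # built-in directional bias

readings = [simulated_reading(t) for t in true_deg]
err = [r - t for r, t in zip(readings, true_deg)]

# Least squares for err = p*sin + q*cos via the 2x2 normal equations.
S = [math.sin(math.radians(t)) for t in true_deg]
C = [math.cos(math.radians(t)) for t in true_deg]
a11 = sum(s * s for s in S)
a12 = sum(s * c for s, c in zip(S, C))
a22 = sum(c * c for c in C)
b1 = sum(s * e for s, e in zip(S, err))
b2 = sum(c * e for c, e in zip(C, err))
det = a11 * a22 - a12 * a12
p = (b1 * a22 - b2 * a12) / det
q = (a11 * b2 - a12 * b1) / det

def calibrate(reading):
    """Subtract the fitted error, evaluated at the (approximate) direction."""
    r = math.radians(reading)
    return reading - (p * math.sin(r) + q * math.cos(r))
```

Because the simulated bias lies exactly in the span of the basis, the fit recovers `p = 2` and `q = -1`; on real sensor data the residual after calibration would shrink rather than vanish.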
93.
This paper describes a robust centroid method, which is a variant of principal component analysis. A genetic local search algorithm is presented to perform the calculations. Simulations are carried out to appraise the performance of the genetic local search algorithm. A real data set with missing data and multiple outliers is analyzed.
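The paper's genetic local search algorithm is not reproduced in the abstract; as a hedged stand-in for what a "robust centroid" buys, the sketch below computes the geometric median (L1 centroid) by Weiszfeld iteration, which resists the gross outlier that drags the ordinary mean far from the bulk of the data.

```python
# One gross outlier among four points on the unit square.
points = [(0, 0), (1, 0), (0, 1), (1, 1), (100, 100)]

# Initialize at the ordinary mean (badly contaminated by the outlier).
cx = sum(p[0] for p in points) / len(points)
cy = sum(p[1] for p in points) / len(points)

# Weiszfeld iteration: reweight each point by the inverse of its distance
# to the current estimate, so far-away outliers get little influence.
for _ in range(200):
    wsum = nx = ny = 0.0
    for x, y in points:
        d = ((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 or 1e-12  # guard zero distance
        w = 1.0 / d
        wsum += w
        nx += w * x
        ny += w * y
    cx, cy = nx / wsum, ny / wsum
```

The mean lands near (20.4, 20.4), while the geometric median stays inside the unit square's neighborhood.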
94.
何云  皮德常 《计算机科学》2015,42(11):251-255, 283
Gene expression data frequently contain missing values, which hinder the study of gene expression. This paper proposes a new similarity measure, the reduced relational grade, and on this basis an iterative missing-value imputation algorithm based on it, RKNNimpute. The reduced relational grade is a refinement of the grey relational grade: it achieves the same effect while significantly lowering the algorithm's time complexity. RKNNimpute uses the reduced relational grade as its similarity measure, adds imputed genes to the candidate set of nearest neighbors, and fills the remaining missing values iteratively, improving both imputation quality and performance. Extensive experiments on time-series, non-time-series, and mixed gene expression data sets were conducted to evaluate RKNNimpute. The results show that the reduced relational grade is an efficient distance measure and that the proposed RKNNimpute algorithm outperforms conventional imputation algorithms.
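The iterative KNN-style scheme can be sketched as below. The similarity used here (inverse mean absolute difference over co-observed columns) is a simple stand-in for the paper's reduced relational grade, which is not reproduced in the abstract; the key structural point shown is that rows filled in earlier rounds join the donor pool for later rounds.

```python
# Toy expression matrix: rows are genes, None marks a missing value.
rows = [
    [1.0, 2.0, 3.0, 4.0],
    [1.1, 2.1, 3.1, None],
    [0.9, 1.9, None, 3.9],
    [5.0, 5.0, 5.0, 5.0],
]

def similarity(a, b):
    """Inverse mean absolute difference over columns observed in both rows."""
    diffs = [abs(x - y) for x, y in zip(a, b)
             if x is not None and y is not None]
    return 1.0 / (1e-9 + sum(diffs) / len(diffs)) if diffs else 0.0

k = 2
changed = True
while changed:                       # iterate until no missing cell remains
    changed = False
    for row in rows:
        for j, v in enumerate(row):
            if v is not None:
                continue
            # Candidate donors: rows with column j observed, including
            # rows that were themselves imputed in an earlier pass.
            donors = [(similarity(row, r), r[j]) for r in rows
                      if r is not row and r[j] is not None]
            donors.sort(reverse=True)
            top = donors[:k]
            wsum = sum(s for s, _ in top)
            if wsum > 0:
                row[j] = sum(s * val for s, val in top) / wsum
                changed = True
```

With these toy values the missing entry in the second row is filled from its two near-identical neighbors, landing close to 4.0.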
95.
96.
In this paper we introduce a general extreme-value regression model and derive Cox and Snell's (1968) general formulae for the second-order biases of the maximum likelihood estimates (MLEs) of the parameters. We obtain formulae that can be computed by means of weighted linear regressions. Furthermore, we give the skewness of order n^{-1/2} of the maximum likelihood estimators of the parameters using Bowman and Shenton's (1988) formula. A simulation study with results obtained using Cox and Snell's (1968) formulae is discussed. Practical uses of this model and of the derived formulae for bias correction are also presented.
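For reference, the general Cox–Snell expression the abstract refers to gives the O(n^{-1}) bias of the MLE in cumulant notation (this is the standard general formula, not the paper's model-specific weighted-regression version):

```latex
B(\hat\theta_a) \;=\; \sum_{r,s,t} \kappa^{ar}\,\kappa^{st}
   \left(\tfrac{1}{2}\,\kappa_{rst} + \kappa_{rs,t}\right) \;+\; O(n^{-2}),
```

where $\kappa^{rs}$ is the $(r,s)$ element of the inverse of the expected information matrix, $\kappa_{rst} = E\!\left(\partial^3\ell/\partial\theta_r\,\partial\theta_s\,\partial\theta_t\right)$, and $\kappa_{rs,t} = E\!\left(\partial^2\ell/\partial\theta_r\,\partial\theta_s \cdot \partial\ell/\partial\theta_t\right)$ for log-likelihood $\ell$. The bias-corrected estimate is then $\tilde\theta_a = \hat\theta_a - B(\hat\theta_a)$.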
97.
对软磁盘产生漏脉冲的原因及其减少措施作了简单介绍,着重对软磁盘的冒脉冲的产生原因作了详细分析,并对软磁盘生产中怎么减少冒脉冲以提高产品合格率作了总结。  相似文献   
98.
Automatic speech recognition (ASR) has reached a very high level of performance in controlled situations. However, performance degrades drastically when environmental noise occurs during recognition. Nowadays, the major challenge is to achieve good robustness to adverse conditions. Missing data recognition has been developed to deal with this challenge. Unlike other denoising methods, missing data recognition does not match the whole data with the acoustic models, but instead considers part of the signal as missing, i.e. corrupted by noise. The main challenge of this approach is to accurately identify the missing parts (also called masks). The work reported here focuses on this issue. We start from developing Bayesian models of the masks, where every spectral feature is classified as reliable or masked and is assumed independent of the rest of the signal. This classification strategy results in sparse and isolated masked features, like the squares of a chess-board, while oracle reliable and unreliable features tend to be clustered into consistent time–frequency blocks. We then propose to take frequency and temporal dependencies into account in order to improve the mask estimation accuracy. Integrating such dependencies leads to a new architecture for a missing data mask estimator. The proposed classifier has been evaluated on the noisy Aurora2 (digit recognition) and Aurora4 (continuous speech) databases. Experimental results show a significant improvement in recognition accuracy when these dependencies are considered.
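The chess-board effect and its cure can be made concrete on a toy spectrogram: a per-cell SNR mask flags one isolated cell, and a neighborhood vote using time–frequency context removes the isolated error. The threshold and the 3x3 majority vote are assumptions for illustration, not the paper's Bayesian classifier.

```python
# Toy 5x5 time-frequency grids of speech and noise energy.
speech = [[5, 5, 5, 5, 5] for _ in range(5)]
noise = [[1, 1, 1, 1, 1] for _ in range(5)]
noise[2][2] = 9  # one noise burst -> one isolated "masked" cell

def snr_mask(spec_s, spec_n, thr=1.0):
    """Per-cell mask: 1 = reliable (speech dominates), 0 = masked."""
    return [[1 if s / n > thr else 0 for s, n in zip(rs, rn)]
            for rs, rn in zip(spec_s, spec_n)]

def smooth(mask):
    """Majority vote over the 3x3 time-frequency neighbourhood of each cell."""
    T, F = len(mask), len(mask[0])
    out = [row[:] for row in mask]
    for t in range(T):
        for f in range(F):
            nb = [mask[a][b]
                  for a in range(max(0, t - 1), min(T, t + 2))
                  for b in range(max(0, f - 1), min(F, f + 2))]
            out[t][f] = 1 if sum(nb) * 2 > len(nb) else 0
    return out

raw = snr_mask(speech, noise)      # has an isolated 0 at (2, 2)
smoothed = smooth(raw)             # the isolated 0 is voted away
```

A real estimator would of course learn the dependency structure rather than hard-code a vote, but the effect on isolated mask errors is the same in spirit.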
99.
The control of the blast furnace ironmaking process requires a model of the process dynamics accurate enough to support the control strategies. However, data sets collected from blast furnaces contain a considerable number of missing values and outliers. These values can significantly affect subsequent statistical analysis and thus the identification of the whole process, so it is important to deal with them. This paper considers a data processing procedure comprising missing value imputation and outlier detection, and examines the impact of this processing on the identification of the blast furnace ironmaking process. Missing values are imputed based on the decision tree algorithm, and outliers are identified and discarded through a set of multivariate outlier detection methods. The data sets before and after processing are then used for identification. Two classic identification methods, N4SID (numerical algorithms for state space subspace system identification) and PEM (prediction error method), are considered and a comparative study is presented.
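The impute-then-screen step can be sketched on a single process variable. A column-median imputer and a robust z-score (median/MAD) detector are used here as simple stand-ins for the paper's decision-tree imputation and multivariate detection methods, which the abstract does not spell out.

```python
# One process variable with missing readings and one gross outlier.
col = [10.0, 10.2, None, 9.9, 10.1, 25.0, 10.0, None, 9.8, 10.3]

def med(values):
    """Median of a list of floats."""
    v = sorted(values)
    n = len(v)
    return v[n // 2] if n % 2 else (v[n // 2 - 1] + v[n // 2]) / 2

# Step 1: impute missing values with the observed median.
median = med([v for v in col if v is not None])
filled = [v if v is not None else median for v in col]

# Step 2: flag outliers by robust z-score; 1.4826 rescales the MAD so it
# estimates the standard deviation under normality, 3.5 is a common cutoff.
mad = med([abs(v - median) for v in filled])
outliers = [i for i, v in enumerate(filled)
            if mad > 0 and abs(v - median) / (1.4826 * mad) > 3.5]
```

The 25.0 reading is flagged while the ordinary scatter around 10 is not; the cleaned series would then feed N4SID or PEM.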
100.
When continuous predictors are present, the classical Pearson and deviance goodness-of-fit tests for assessing logistic model fit break down. The Hosmer-Lemeshow test can be used in these situations. While simple to perform and widely used, it does not have desirable power in many cases and provides no further information on the source of any detectable lack of fit. Tsiatis proposed a score statistic to test for covariate regional effects. While conceptually elegant, its lack of a general rule for how to partition the covariate space has, to a certain degree, limited its popularity. We propose a new method for goodness-of-fit testing that uses a very general partitioning strategy (clustering) in the covariate space and either a Pearson statistic or a score statistic. Properties of the proposed statistics are discussed, and a simulation study demonstrates increased power to detect model misspecification in a variety of settings. An application of these different methods to data from a clinical trial illustrates their use. Discussions on further improving the proposed tests and extending this new method to other data situations, such as ordinal response regression models, are also included.
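The proposed clustering-based partition is not reproduced in the abstract; as a hedged baseline sketch, the classical Hosmer-Lemeshow statistic it improves on bins observations by predicted probability and compares observed with expected event counts per bin. The probabilities and outcomes below are made up for illustration.

```python
# Fitted event probabilities and observed binary outcomes (illustrative only).
probs = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.95]
y = [0, 0, 0, 1, 0, 1, 1, 1, 1, 1]

def hosmer_lemeshow(p, y, g=5):
    """Classical HL statistic with g groups formed by sorted fitted probability."""
    pairs = sorted(zip(p, y))
    size = len(pairs) // g
    stat = 0.0
    for k in range(g):
        grp = pairs[k * size:(k + 1) * size] if k < g - 1 else pairs[(g - 1) * size:]
        obs = sum(yi for _, yi in grp)       # observed events in the group
        exp = sum(pi for pi, _ in grp)       # expected events in the group
        n = len(grp)
        pbar = exp / n
        if 0 < pbar < 1:
            stat += (obs - exp) ** 2 / (n * pbar * (1 - pbar))
    return stat  # referred to a chi-square with g - 2 degrees of freedom

H = hosmer_lemeshow(probs, y)
```

The paper's method replaces the sorted-probability grouping with clusters formed in the covariate space itself, so lack of fit can be traced back to covariate regions.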